64 research outputs found

    Reducing the Number of Annotations in a Verification-oriented Imperative Language

    Automated software verification is a very active field of research that has made enormous progress in both theoretical and practical aspects. Recently, a substantial amount of research effort has gone into applying these techniques to mainstream programming languages. These languages typically provide powerful features such as reflection, aliasing and polymorphism, which are handy for practitioners but make verification a real challenge. In this work we present Pest, a simple, experimental, while-style, multiprocedural, imperative programming language conceived with verifiability as one of its main goals. The language forces developers to think concurrently about both the statements needed to implement an algorithm and the assertions required to prove its correctness. To aid programmers, we propose several techniques to reduce the number and complexity of the annotations required to verify their programs. In particular, we show that high-level iteration constructs may alleviate the need for complex loop annotations. Comment: 15 pages, 8 figures.
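
    To give a feel for the annotation-reduction idea, the sketch below mimics it in Python, since Pest's own syntax is not reproduced in the abstract: an explicit while loop needs a programmer-supplied invariant (shown here as an assert standing in for a verification annotation), whereas a high-level iteration construct carries that invariant in its semantics.

# Illustrative sketch only; asserts stand in for Pest-style verification annotations.

def sum_with_while(xs):
    """Low-level loop: the verifier needs an explicit, programmer-written loop invariant."""
    total, i = 0, 0
    while i < len(xs):
        # Loop invariant the programmer must state and maintain:
        assert total == sum(xs[:i]), "invariant: total equals the sum of the first i elements"
        total += xs[i]
        i += 1
    return total

def sum_with_high_level_construct(xs):
    """High-level iteration: the invariant is implied by the construct, so no annotation is needed."""
    return sum(xs)

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5]
    assert sum_with_while(data) == sum_with_high_level_construct(data) == 14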

    The DynAlloy Visualizer

    We present an extension to the DynAlloy tool to navigate DynAlloy counterexamples: the DynAlloy Visualizer. The user interface mimics the functionality of a programming language debugger. Without this tool, a DynAlloy user is forced to deal with the internals of the Alloy intermediate representation in order to debug a flaw in her model. Comment: In Proceedings LAFM 2013, arXiv:1401.056

    Model Checker Execution Reports

    Software model checking constitutes an undecidable problem and, as such, even an ideal tool will in some cases fail to give a conclusive answer. In practice, software model checkers often fail and usually do not provide any information on what was effectively checked. The purpose of this work is to provide a conceptual framing to extend software model checkers in a way that allows users to access information about incomplete checks. We characterize the information that model checkers themselves can provide, in terms of analyzed traces (i.e., sequences of statements) and safe cones, and present the notion of execution reports, which we also formalize. We instantiate these concepts for a family of techniques based on Abstract Reachability Trees and implement the approach using the software model checker CPAchecker. We evaluate our approach empirically and provide examples to illustrate the execution reports produced and the information that can be extracted.
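
    As a rough illustration (not CPAchecker's actual API, and with hypothetical names), an execution report can be pictured as the set of traces -- sequences of statements -- that an inconclusive run effectively analyzed, kept alongside the verdict, so a user can ask whether a particular behaviour was covered.

# Simplified sketch of an "execution report" for an inconclusive model-checking run.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Trace:
    statements: tuple  # e.g. ("x = 0", "while (x < n)", "x = x + 1")

@dataclass
class ExecutionReport:
    analyzed: set = field(default_factory=set)  # traces effectively checked (shown safe)
    verdict: str = "UNKNOWN"                    # "SAFE", "UNSAFE" or "UNKNOWN"

    def record(self, trace):
        self.analyzed.add(trace)

    def covers(self, trace):
        # Was this concrete trace among those effectively checked?
        return trace in self.analyzed

report = ExecutionReport()
report.record(Trace(("x = 0", "assert x >= 0")))
print(report.verdict, report.covers(Trace(("x = 0", "assert x >= 0"))))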

    On Verifying Resource Contracts using Code Contracts

    In this paper we present an approach to check resource consumption contracts using an off-the-shelf static analyzer. We propose a set of annotations to support resource usage specifications, in particular, dynamic memory consumption constraints. Since dynamic memory may be recycled by a memory manager, the consumption of this resource is not monotone. The specification language can express both memory consumption and lifetime properties in a modular fashion. We develop a proof-of-concept implementation by extending Code Contracts' specification language. To verify the correctness of these annotations we rely on the Code Contracts static verifier and a points-to analysis. We also briefly discuss possible extensions of our approach to deal with non-linear expressions. Comment: In Proceedings LAFM 2013, arXiv:1401.056
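
    The non-monotonicity point can be illustrated with a purely hypothetical Python sketch. The actual work annotates C# code and discharges the bounds statically with the Code Contracts verifier; the decorator below merely shows the kind of property such an annotation expresses, checked at run time against a toy memory model.

# Hypothetical illustration only: a bound on peak live memory, which is not
# monotone because released objects are recycled by the memory manager.

class MemoryModel:
    def __init__(self):
        self.live = 0
        self.peak = 0

    def alloc(self, n):
        self.live += n
        self.peak = max(self.peak, self.live)

    def free(self, n):
        self.live -= n  # recycling makes consumption non-monotone

def consumes_at_most(peak_bound):
    """Hypothetical contract: the decorated function's peak live memory stays within the bound."""
    def wrap(fn):
        def checked(mem, *args, **kwargs):
            result = fn(mem, *args, **kwargs)
            assert mem.peak <= peak_bound, f"contract violated: peak {mem.peak} > {peak_bound}"
            return result
        return checked
    return wrap

@consumes_at_most(peak_bound=2)
def process(mem, items):
    for _ in items:
        mem.alloc(1)   # temporary buffer per item
        mem.free(1)    # released before the next iteration
    return len(list(items)) if not isinstance(items, range) else len(items)

print(process(MemoryModel(), range(10)))  # peak stays at 1, well within the bound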

    Summary-based inference of quantitative bounds of live heap objects

    This article presents a symbolic static analysis for computing parametric upper bounds on the number of simultaneously live objects of sequential Java-like programs. Inferring the peak amount of irreclaimable objects is the cornerstone for analyzing the potential heap-memory consumption of stand-alone applications or libraries. The analysis builds method-level summaries quantifying the peak number of live objects and the number of escaping objects. Summaries are built by resorting to the summaries of their callees. The usability, scalability and precision of the technique are validated by successfully predicting the object heap usage of a medium-size, real-life application which is significantly larger than other previously reported case studies.
    Authors: Victor Adrian Braberman, Diego David Garbervetsky, Sergio Fabian Yovine (Universidad de Buenos Aires and CONICET, Argentina); Samuel Hym (Université Lille 3, France).
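
    A much-simplified sketch of the compositional idea follows: each method summary records its peak number of live objects and the objects that escape to its caller, and a caller's summary is assembled from its own allocations plus its callees' summaries. The real analysis computes parametric (symbolic) expressions; the Python below uses plain numbers and hypothetical names.

# Simplified compositional sketch: plain numbers instead of parametric expressions.

from dataclasses import dataclass

@dataclass
class Summary:
    peak: int      # max simultaneously-live objects while the method runs
    escaping: int  # objects still reachable by the caller afterwards

def combine(local_allocs, callee_summaries):
    """Build a caller summary from its own allocations and its callees' summaries."""
    live = local_allocs        # caller's own objects held across the calls
    peak = live
    for s in callee_summaries:
        peak = max(peak, live + s.peak)  # callee's transient peak on top of caller's live set
        live += s.escaping               # escaped objects stay live in the caller
    return Summary(peak=max(peak, live), escaping=live)

leaf = Summary(peak=3, escaping=1)          # e.g. builds 3 temporaries, returns 1 object
caller = combine(local_allocs=2, callee_summaries=[leaf, leaf])
print(caller)  # Summary(peak=6, escaping=4)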

    Dynamic Slicing by On-demand Re-execution

    In this paper, we propose a novel approach that offers an alternative to the prevalent paradigm for dynamic slice construction. Dynamic slicing requires the dynamic data and control dependencies that arise in an execution. In the prevalent paradigm, memory reference information is recorded during a single execution and then traversed to extract dependencies. Such execute-once approaches and tools are challenged even by moderately sized executions of simple, short programs. We propose to shift the practical time complexity from execution size to slice size. In particular, our approach executes the program multiple times, tracking targeted information in each execution. We present a concrete algorithm that follows an on-demand re-execution paradigm and uses a novel concept, the frontier dependency, to incrementally build a dynamic slice. To focus dependency tracking, the algorithm relies on static analysis. We report the results of an evaluation on the SV-COMP benchmark and the Antlr4 unit tests, which provide evidence that on-demand re-execution can yield performance gains, particularly when the slice is small and the execution is large.
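
    The following toy Python sketch conveys the re-execution loop on a made-up straight-line program: each iteration stands for one re-execution that resolves only the current frontier dependencies, whose defining statements join the slice and whose used variables become the next frontier. The program representation and names are illustrative, not the paper's algorithm as implemented.

# Toy straight-line program: statement id -> (defined variable, used variables)
PROGRAM = {
    1: ("a", set()),
    2: ("b", {"a"}),
    3: ("c", set()),
    4: ("d", {"b", "c"}),
}

def run_and_resolve(frontier_vars):
    """One re-execution: find, for each frontier variable, the statement that last defined it."""
    resolved = {}
    for stmt_id, (defined, _) in PROGRAM.items():  # ids are in execution order
        if defined in frontier_vars:
            resolved[defined] = stmt_id
    return resolved

def dynamic_slice(criterion_vars):
    slice_stmts, frontier = set(), set(criterion_vars)
    while frontier:                              # one re-execution per iteration
        resolved = run_and_resolve(frontier)
        frontier = set()
        for _, stmt_id in resolved.items():
            if stmt_id not in slice_stmts:
                slice_stmts.add(stmt_id)
                frontier |= PROGRAM[stmt_id][1]  # the statement's uses become new frontier deps
    return sorted(slice_stmts)

print(dynamic_slice({"d"}))  # [1, 2, 3, 4] -- everything d transitively depends on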

    Waterfall: Primitives Generation on the Fly

    Modern languages are typically supported by managed runtimes (virtual machines). Since VMs have to deal with many concerns, such as memory management, the abstract execution model and scheduling, they tend to be very complex. Additionally, VMs have to meet strong performance requirements. This demand for performance is one of the main reasons why many VMs are built statically; design decisions are thus frozen at compile time, preventing changes at runtime. One clear example is the impossibility of dynamically adapting or changing primitives of the VM once it has been compiled. In this work we present a toolchain that allows components such as primitives and plug-ins to be altered and configured at runtime. The main contribution is Waterfall, a dynamic and reflective translator from Slang, a restricted subset of Smalltalk, to native code. Waterfall generates primitives on demand and executes them on the fly. We validate our approach by implementing dynamic primitive modification and runtime customization of VM plug-ins.

    Focused Dynamic Slicing for Large Applications using an Abstract Memory-Model

    Dynamic slicing techniques compute program dependencies to find all statements that affect the value of a variable at a program point for a specific execution. Despite their many potential uses, their applicability is limited by the fact that they typically cannot scale beyond small applications. We believe that at the heart of this limitation is the use of memory references to identify data dependencies. In particular, working with memory references hinders treating the code-to-be-sliced (e.g., classes the user is interested in) differently from the rest of the code (including libraries and frameworks). The ability to perform a coarser-grained analysis for the code that is not under focus may provide performance gains and could become one avenue toward scalability. In this paper, we propose a novel approach that completely replaces memory reference recording and processing with a memory analysis model that works with program symbols (i.e., terms). In fact, this approach makes it possible not to instrument -- and thus not to generate any trace for -- code that is not part of the code-to-be-sliced. We report on an implementation of an abstract dynamic slicer for C#, DynAbs, and an evaluation that shows how large and relevant parts of Roslyn and PowerShell -- two of the largest modern C# applications on GitHub -- can be sliced for their test case assertions in at most a few minutes. We also show how narrowing the code-to-be-sliced focus can bring important speedups with marginal precision loss.
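
    A minimal sketch of the symbol-based idea, with hypothetical names: data dependencies are keyed by terms such as "x" or "o.f" rather than by concrete memory addresses, and calls into out-of-focus code are summarized coarsely (result depends on all arguments) instead of being traced statement by statement.

# Illustrative sketch: dependency tracking over symbolic terms, not memory addresses.

deps = {}  # term -> set of terms it currently depends on

def assign(target_term, source_terms):
    """In-focus statement: record fine-grained dependencies, e.g. assign("o.f", {"x"})."""
    deps[target_term] = set(source_terms)

def library_call(result_term, argument_terms):
    """Out-of-focus call: coarse summary, no per-statement trace is generated."""
    deps[result_term] = set(argument_terms)

def transitive_deps(term):
    seen, stack = set(), [term]
    while stack:
        for t in deps.get(stack.pop(), ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

assign("x", set())
assign("o.f", {"x"})
library_call("r", {"o.f", "y"})   # library internals stay untraced
print(transitive_deps("r"))       # e.g. {'x', 'y', 'o.f'} (set order varies)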

    A Metaobject Protocol for Optimizing Application-Specific Run-Time Variability

    Just-in-time compilers and their aggressive speculative optimizations have drastically reduced the performance gap between dynamic and static languages. To speculate successfully, compilers rely on the program variability observed at run time being low, and use heuristics to determine when optimization is beneficial. However, some variability patterns are hard to capture with heuristics. Specifically, ephemeral, warmup, rare, and highly indirect variability are challenges for today's compiler heuristics and, as a consequence, can lead to reduced application performance. These types of variability are, however, identifiable at the application level and could be mitigated with information provided by developers. As a solution, we propose a metaobject protocol for dynamic compilation systems that enables application developers to provide such information at run time. As a proof of concept, we demonstrate performance improvements for a few scenarios in a dynamic language built on top of the Truffle and Graal system.
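
    A purely hypothetical Python sketch of the protocol's intent (the actual work targets Truffle/Graal, whose API is not shown in the abstract): the application declares how a given site's behaviour varies at run time, and the compilation system consults those hints rather than relying on heuristics alone when deciding whether to speculate.

# Hypothetical metaobject-protocol-style hints for a dynamic compilation system.

class CompilationHints:
    """Stand-in for the runtime side of the protocol."""
    def __init__(self):
        self.hints = {}

    def declare(self, site, variability):
        # variability: "stable-after-warmup", "ephemeral", "highly-variable", ...
        self.hints[site] = variability

    def should_speculate(self, site):
        # A heuristic alone might misjudge rare or warmup-only variability;
        # a developer-provided hint settles the decision.
        return self.hints.get(site) in (None, "stable-after-warmup")

runtime = CompilationHints()
runtime.declare("config.lookup", "stable-after-warmup")  # safe to specialize on
runtime.declare("plugin.dispatch", "highly-variable")    # not worth speculating on
print(runtime.should_speculate("config.lookup"), runtime.should_speculate("plugin.dispatch"))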